Methods of High-Fidelity, High-Efficiency Class-D Audio Amplification
Gallium nitride (GaN) field-effect transistors (FETs) have opened a path toward full-frequency-range class-D audio amplifiers with low distortion and noise, thanks to their ability to switch at frequencies far above the upper range of human hearing. Compared to traditional silicon transistors, GaN transistors offer superior efficiency, particularly at power levels below their maxima. Paired with an analog-to-digital converter, a digital signal processor, and a pulse-code-modulation-to-pulse-width-modulation converter, these transistors are used to design and implement a solid-state amplifier capable of delivering 100 watts into 8-ohm speakers from a 1-volt line-level input. The digital signal processor, together with Analog Devices' SigmaStudio development software, allows equalization, filtering, and other modification of the signal in this design. Together, these equalization features, the use of GaN transistors, and various digital encoding methods are examined for their benefits in producing high-power, high-fidelity audio in small packages.
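The pulse-code-modulation-to-pulse-width-modulation step mentioned above can be sketched in a few lines. This is a minimal illustration of the general encoding idea, not the paper's implementation; the function name, carrier resolution, and sample rate are illustrative assumptions:

```python
import numpy as np

def pcm_to_pwm(samples, carrier_steps=64):
    """Map each PCM sample in [-1, 1] to a binary PWM word whose duty
    cycle is proportional to the sample's amplitude (naturally sampled)."""
    ramp = np.arange(carrier_steps) / carrier_steps  # sawtooth carrier in [0, 1)
    words = []
    for s in samples:
        duty = (s + 1.0) / 2.0               # map amplitude [-1, 1] -> duty [0, 1]
        words.append((ramp < duty).astype(np.uint8))
    return np.concatenate(words)

# One millisecond of a 1 kHz sine at 48 kHz; the local mean of the PWM
# stream tracks the audio signal, which the output filter then recovers.
t = np.arange(48) / 48_000
stream = pcm_to_pwm(np.sin(2 * np.pi * 1_000 * t))
```

Because the switching FETs only see the binary stream, the audio information is carried entirely in the duty cycle, which is why higher GaN switching rates translate into finer amplitude resolution across the audio band.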
Randomized Histogram Matching: A Simple Augmentation for Unsupervised Domain Adaptation in Overhead Imagery
Modern deep neural networks (DNNs) are highly accurate on many recognition tasks for overhead (e.g., satellite) imagery. However, visual domain shifts (e.g., statistical changes due to geography, sensor, or atmospheric conditions) remain a challenge, causing the accuracy of DNNs to degrade substantially and unpredictably when testing on new sets of imagery. In this work, we model domain shifts caused by variations in imaging hardware, lighting, and other conditions as non-linear pixel-wise transformations, and we perform a systematic study indicating that modern DNNs can become largely robust to these types of transformations if provided with appropriate training data augmentation. In general, however, we do not know the transformation between two sets of imagery. To overcome this, we propose a fast, real-time, unsupervised training augmentation technique, termed randomized histogram matching (RHM). We conduct experiments with two large benchmark datasets for building segmentation and find that despite its simplicity, RHM consistently yields similar or superior performance compared to state-of-the-art unsupervised domain adaptation approaches, while being significantly simpler and more computationally efficient. RHM also offers substantially better performance than other comparably simple approaches that are widely used for overhead imagery.
Comment: Includes a main paper (10 pages). This paper is currently undergoing peer review.
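A minimal sketch of the core RHM idea, assuming a pool of unlabeled target-domain images to draw a random reference from; the function names and the single-channel float image format are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def match_histogram(source, reference):
    """Remap source pixel values so their histogram matches the reference's,
    via the standard CDF-matching construction."""
    src_vals, src_idx, src_counts = np.unique(
        source.ravel(), return_inverse=True, return_counts=True)
    ref_vals, ref_counts = np.unique(reference.ravel(), return_counts=True)
    src_cdf = np.cumsum(src_counts) / source.size
    ref_cdf = np.cumsum(ref_counts) / reference.size
    # Interpolate each source quantile onto the reference intensity scale.
    matched = np.interp(src_cdf, ref_cdf, ref_vals)
    return matched[src_idx].reshape(source.shape)

def randomized_histogram_match(image, target_pool, rng=np.random):
    """Augmentation: match a training image against a randomly drawn
    unlabeled target-domain image, so the network sees many plausible
    pixel-wise transformations during training."""
    reference = target_pool[rng.randint(len(target_pool))]
    return match_histogram(image, reference)
```

The randomization over the target pool is what turns plain histogram matching into a training-time augmentation: each epoch the same labeled image appears under a different plausible pixel-wise transformation.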
Segment anything, from space?
Recently, the first foundation model developed specifically for vision tasks was released, termed the "Segment Anything Model" (SAM). SAM can segment objects in input imagery based upon cheap input prompts, such as one (or more) points, a bounding box, or a mask. The authors examined the zero-shot image segmentation accuracy of SAM on a large number of vision benchmark tasks and found that SAM usually achieved recognition accuracy similar to, or sometimes exceeding, that of vision models that had been trained on the target tasks. The impressive generalization of SAM for segmentation has major implications for vision researchers working on natural imagery. In this work, we examine whether SAM's impressive performance extends to overhead imagery problems, and we help guide the community's response to its development. We examine SAM's performance on a set of diverse and widely studied benchmark tasks. We find that SAM does often generalize well to overhead imagery, although it fails in some cases due to the unique characteristics of overhead imagery and the target objects. We report on these unique systematic failure cases for remote sensing imagery, which may constitute useful directions for future research by the community. Note that this is a working paper, and it will be updated as additional analysis and results are completed.
Comment: Working paper.
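SAM itself requires pretrained weights, so as a toy stand-in the sketch below only illustrates what a point prompt asks for: given one clicked pixel, return a mask of the connected region of similar pixels (a simple flood fill). This is an analogy for the prompt interface, not SAM's method:

```python
import numpy as np

def segment_from_point(image, point, tol=0.1):
    """Return a boolean mask of pixels 4-connected to `point` whose
    values lie within `tol` of the clicked pixel's value."""
    h, w = image.shape
    seed_val = image[point]
    mask = np.zeros((h, w), dtype=bool)
    stack = [point]
    while stack:
        r, c = stack.pop()
        if not (0 <= r < h and 0 <= c < w) or mask[r, c]:
            continue
        if abs(image[r, c] - seed_val) > tol:
            continue
        mask[r, c] = True
        stack.extend([(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)])
    return mask

# Toy scene: a bright 3x3 "building" on a dark background; one click
# inside it recovers the whole object, the essence of a point prompt.
scene = np.zeros((5, 5))
scene[1:4, 1:4] = 1.0
building_mask = segment_from_point(scene, (2, 2))
```

The failure cases the paper reports arise exactly where this analogy breaks down for overhead imagery: small, densely packed, or low-contrast objects give a single point prompt far less to work with than in natural images.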